
Training deep learning based denoisers without ground truth data

Neural Information Processing Systems

Recently developed deep-learning-based denoisers often outperform state-of-the-art conventional denoisers, such as BM3D. They are typically trained to minimize the mean squared error (MSE) between the output image of a deep neural network and a ground truth image. In deep-learning-based denoisers, it is important to use high-quality noiseless ground truth data for high performance, but it is often challenging or even infeasible to obtain noiseless images in application areas such as hyperspectral remote sensing and medical imaging. In this article, we propose a method based on Stein's unbiased risk estimator (SURE) for training deep neural network denoisers using only noisy images. We demonstrate that our SURE-based method, without the use of ground truth data, is able to train deep neural network denoisers whose performance is close to that of networks trained with ground truth, and to outperform the state-of-the-art denoiser BM3D. Further improvements were achieved when noisy test images were used for training the denoiser networks with our proposed SURE-based method.
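As a sketch of the idea (not the paper's exact implementation), the SURE objective for Gaussian noise can be evaluated from noisy data alone; the intractable divergence term is approximated with a single Monte Carlo probe. The linear shrinkage "denoiser" and all names below are illustrative:

```python
import numpy as np

def mc_sure(y, denoise, sigma, eps=1e-4, rng=None):
    """Monte Carlo SURE estimate of the per-pixel MSE of `denoise`
    on noisy input y, assuming y = x + n with n ~ N(0, sigma^2 I).
    The divergence term is approximated with one random probe."""
    rng = np.random.default_rng(0) if rng is None else rng
    n = y.size
    f_y = denoise(y)
    b = rng.standard_normal(y.shape)                 # random probe vector
    div = b.ravel() @ (denoise(y + eps * b) - f_y).ravel() / eps
    return np.mean((y - f_y) ** 2) - sigma**2 + 2 * sigma**2 * div / n

# Toy check with a linear shrinkage "denoiser": SURE should track the
# true MSE without ever touching the clean signal x.
rng = np.random.default_rng(42)
sigma, a = 0.5, 0.7
x = rng.standard_normal(100_000)                     # hypothetical clean signal
y = x + sigma * rng.standard_normal(x.shape)         # noisy observation
shrink = lambda v: a * v
sure = mc_sure(y, shrink, sigma, rng=rng)
true_mse = np.mean((shrink(y) - x) ** 2)
```

In a training loop, `sure` would replace the supervised MSE loss, with `denoise` being the network and the gradient taken through the estimator.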




Training deep learning based denoisers without ground truth data

Shakarim Soltanayev, Se Young Chun

Neural Information Processing Systems

Conventional denoising methods do not usually require noiseless ground truth images to perform denoising, but often require them for tuning parameters of image filters to elicit the best possible results (minimum MSE).
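A toy illustration of that tuning step, assuming a simple soft-threshold filter (the filter, signal model, and grid are all made up): selecting the threshold that minimizes MSE requires the clean signal `x`, which is exactly the ground truth that is often unavailable.

```python
import numpy as np

def soft_threshold(y, t):
    # Classic soft-thresholding filter with threshold parameter t.
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

rng = np.random.default_rng(0)
n = 50_000
# Sparse "clean" signal: 10% active entries, the rest exactly zero.
x = np.where(rng.random(n) < 0.1, 3.0 * rng.standard_normal(n), 0.0)
y = x + 0.5 * rng.standard_normal(n)                 # noisy observation

# Grid search for the minimum-MSE threshold -- note it needs x.
candidates = np.linspace(0.0, 2.0, 41)
mses = [np.mean((soft_threshold(y, t) - x) ** 2) for t in candidates]
best_t = candidates[int(np.argmin(mses))]
```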



clarifying the paper

Neural Information Processing Systems

We would like to thank the reviewers for their comments and remarks. Reviewers #1 and #4 inquired about the quality of our method with smaller training sets. [Table: PSNR versus number of training images (all, 10 000, 1000, 500, 300, 200, 100; 10 runs); the Baseline N2C row begins 31.60, 31.59; the remaining entries did not survive extraction.] Reviewer #1 remarked that our experiments are performed on synthetic data only. As the non-learned CBM3D method is also designed for natural images, we feel that our comparisons are fair.



Constrained Diffusion Models for Synthesizing Representative Power Flow Datasets

Hoseinpour, Milad, Dvorkin, Vladimir

arXiv.org Artificial Intelligence

High-quality power flow datasets are essential for training machine learning models in power systems. However, security and privacy concerns restrict access to real-world data, making statistically accurate and physically consistent synthetic datasets a viable alternative. We develop a diffusion model for generating synthetic power flow datasets from real-world power grids that both replicate the statistical properties of the real-world data and ensure AC power flow feasibility. To enforce the constraints, we incorporate gradient guidance based on the power flow constraints to steer diffusion sampling toward feasible samples. For computational efficiency, we further leverage insights from the fast decoupled power flow method and propose a variable decoupling strategy for the training and sampling of the diffusion model. These solutions lead to a physics-informed diffusion model, generating power flow datasets that outperform those from the standard diffusion in terms of feasibility and statistical similarity, as shown in experiments across IEEE benchmark systems.
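A minimal sketch of the gradient-guidance idea, abstracted away from power flow (the linear constraint, step size, and iteration count are illustrative, not the paper's model): after each diffusion denoising step, the sample is nudged down the gradient of a constraint penalty so that sampling drifts toward the feasible set.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 8))        # stand-in linear constraints A x = b
b = rng.standard_normal(3)

def guide(x, step=0.02, iters=1000):
    """Repeated guidance steps: gradient descent on 0.5 * ||A x - b||^2.
    In guided diffusion these steps would be interleaved with the
    (omitted) denoising updates of the sampler."""
    for _ in range(iters):
        x = x - step * A.T @ (A @ x - b)
    return x

x0 = rng.standard_normal(8)            # stand-in for a raw diffusion sample
x1 = guide(x0)
viol0 = np.linalg.norm(A @ x0 - b)     # constraint violation before guidance
viol1 = np.linalg.norm(A @ x1 - b)     # ... and after
```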


Graph-based Robot Localization Using a Graph Neural Network with a Floor Camera and a Feature Rich Industrial Floor

Brämer, Dominik, Kleingarn, Diana, Urbann, Oliver

arXiv.org Artificial Intelligence

Accurate localization represents a fundamental challenge in robotic navigation. Traditional methodologies, such as Lidar or QR-code-based systems, suffer from inherent scalability and adaptability constraints, particularly in complex environments. In this work, we propose an innovative localization framework that harnesses flooring characteristics by employing graph-based representations and Graph Convolutional Networks (GCNs). Our method uses graphs to represent floor features, which helps localize the robot more accurately (0.64 cm error) and more efficiently than comparing individual image features. Additionally, this approach successfully addresses the kidnapped robot problem in every frame without requiring complex filtering processes.
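To make the graph-based representation concrete, here is a hypothetical sketch (coordinates and the choice of k are made up, not taken from the paper) of turning detected floor keypoints into a k-nearest-neighbour graph of the kind a GCN can consume:

```python
import numpy as np

def knn_adjacency(points, k=2):
    """Build a symmetric adjacency matrix connecting each keypoint
    to its k nearest neighbours (no self-loops)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)               # exclude self-distance
    nbrs = np.argsort(d, axis=1)[:, :k]
    A = np.zeros((len(points), len(points)))
    for i, js in enumerate(nbrs):
        A[i, js] = 1.0
    return np.maximum(A, A.T)                 # symmetrise

# Four illustrative keypoints: three clustered, one far away.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
A = knn_adjacency(pts, k=2)
```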


U-Net Based Healthy 3D Brain Tissue Inpainting

Zhang, Juexin, Weng, Ying, Chen, Ke

arXiv.org Artificial Intelligence

This paper introduces a novel approach to synthesize healthy 3D brain tissue from masked input images, specifically focusing on the task of 'ASNR-MICCAI BraTS Local Synthesis of Tissue via Inpainting'. Our proposed method employs a U-Net-based architecture, which is designed to effectively reconstruct the missing or corrupted regions of brain MRI scans. To enhance our model's generalization capabilities and robustness, we implement a comprehensive data augmentation strategy that involves randomly masking healthy images during training. Our model is trained on the BraTS-Local-Inpainting dataset and demonstrates exceptional performance in recovering healthy brain tissue. The evaluation metrics employed, including the Structural Similarity Index (SSIM), Peak Signal-to-Noise Ratio (PSNR), and Mean Squared Error (MSE), consistently yield impressive results. On the BraTS-Local-Inpainting validation set, our model achieved an SSIM of 0.841, a PSNR of 23.257, and an MSE of 0.007. Notably, these evaluation metrics exhibit relatively low standard deviations, i.e., 0.103 for SSIM, 4.213 for PSNR, and 0.007 for MSE, which indicates our model's reliability and consistency across various input scenarios. Our method also secured first place in the challenge.
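The random-masking augmentation described above can be sketched as follows (volume shape, mask size, and function names are illustrative assumptions, not the paper's implementation): a random cuboid of a healthy volume is zeroed out so the network learns to inpaint it.

```python
import numpy as np

def random_cuboid_mask(vol, size=(8, 8, 8), rng=None):
    """Zero out a randomly placed cuboid of `size` in a 3D volume.
    Returns the masked volume and the boolean validity mask
    (True = kept voxel, False = region to be inpainted)."""
    rng = np.random.default_rng() if rng is None else rng
    z, y, x = (rng.integers(0, s - m + 1) for s, m in zip(vol.shape, size))
    mask = np.ones_like(vol, dtype=bool)
    mask[z:z+size[0], y:y+size[1], x:x+size[2]] = False
    return vol * mask, mask

rng = np.random.default_rng(0)
vol = rng.random((32, 32, 32)) + 0.1       # strictly positive "healthy" volume
masked, mask = random_cuboid_mask(vol, rng=rng)
```

During training, `masked` would be the network input and `vol` the reconstruction target inside the masked cuboid.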


Graph Neural Network-Based Predictive Modeling for Robotic Plaster Printing

Rivera, Diego Machain, Jenny, Selen Ercan, Tsai, Ping Hsun, Lloret-Fritschi, Ena, Salamanca, Luis, Perez-Cruz, Fernando, Tatsis, Konstantinos E.

arXiv.org Artificial Intelligence

This work proposes a Graph Neural Network (GNN) modeling approach to predict the resulting surface from a particle-based fabrication process. The latter consists of spray-based printing of cementitious plaster on a wall and is facilitated by a robotic arm. The predictions are computed using the robotic arm trajectory features, such as position, velocity and direction, as well as the printing process parameters. The proposed approach, based on a particle representation of the wall domain and the end effector, allows for the adoption of a graph-based solution. The GNN model consists of an encoder-processor-decoder architecture and is trained using data from laboratory tests, while the hyperparameters are optimized by means of a Bayesian scheme. The aim of this model is to act as a simulator of the printing process, to be used ultimately for the generation of the robotic arm trajectory and the optimization of the printing parameters, towards the materialization of an autonomous plastering process. The performance of the proposed model is assessed in terms of the prediction error against unseen ground truth data, which shows its generality in varied scenarios, as well as in comparison with the performance of an existing benchmark model. The results demonstrate a significant improvement over the benchmark model, with notably better performance and enhanced error scaling across prediction steps.
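A single message-passing update of the kind found in the processor of an encoder-processor-decoder GNN can be sketched as below (graph, feature sizes, and normalisation are illustrative assumptions, not the paper's architecture): node states are updated from degree-normalised neighbour aggregation followed by a learned transform.

```python
import numpy as np

def mp_step(A, H, W):
    """One message-passing step: aggregate neighbour features with a
    row-normalised adjacency (self-loops added), transform, apply ReLU."""
    A_hat = A + np.eye(len(A))                    # add self-loops
    deg = A_hat.sum(axis=1)
    A_norm = A_hat / deg[:, None]                 # row-normalise aggregation
    return np.maximum(A_norm @ H @ W, 0.0)        # ReLU nonlinearity

rng = np.random.default_rng(0)
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)            # 3-node path graph
H = rng.standard_normal((3, 4))                   # per-node (particle) features
W = rng.standard_normal((4, 4))                   # learnable weight matrix
H1 = mp_step(A, H, W)
```

Stacking several such steps lets information propagate across the particle graph; a decoder would then map the final node states to surface predictions.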